time-series prediction
Hybridization of Persistent Homology with Neural Networks for Time-Series Prediction: A Case Study in Wave Height
Lin, Zixin, Zulkepli, Nur Fariha Syaqina, Kasihmuddin, Mohd Shareduwan Mohd, Gobithaasan, R. U.
Time-series prediction is an active area of research across various fields, often challenged by the fluctuating influence of short-term and long-term factors. In this study, we introduce a feature engineering method that enhances the predictive performance of neural network models. Specifically, we leverage computational topology techniques to derive valuable topological features from input data, boosting the predictive accuracy of our models. Our focus is on predicting wave heights, utilizing models based on topological features within feedforward neural networks (FNNs), recurrent neural networks (RNNs), long short-term memory networks (LSTM), and RNNs with gated recurrent units (GRU). For time-ahead predictions, the enhancements in $R^2$ score were significant for FNNs, RNNs, LSTM, and GRU models. Additionally, these models also showed significant reductions in maximum errors and mean squared errors.
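The abstract does not spell out which topological features are derived, so as a rough, hedged illustration of the general idea: 0-dimensional persistent homology of a delay-embedded series reduces to the edge lengths of a minimum spanning tree, and summary statistics of the resulting diagram can be appended to a network's input vector. The function names and the choice of summaries below are illustrative, not the authors':

```python
import numpy as np

def delay_embed(series, dim=3, tau=2):
    """Takens delay embedding: map a 1-D series to points in R^dim."""
    n = len(series) - (dim - 1) * tau
    return np.stack([series[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

def h0_persistence(points):
    """0-dimensional persistent homology of the Vietoris-Rips filtration.
    H0 death times equal the edge lengths of a minimum spanning tree,
    computed here with Prim's algorithm."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf          # never re-select a tree node
        j = np.argmin(best)
        deaths.append(best[j])          # this component dies at the MST edge length
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return np.sort(np.array(deaths))

def topological_features(series, dim=3, tau=2):
    """Summary statistics of the H0 diagram, usable as extra NN inputs."""
    deaths = h0_persistence(delay_embed(np.asarray(series, float), dim, tau))
    p = deaths / deaths.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))   # persistence entropy
    return np.array([deaths.mean(), deaths.max(), entropy])
```

Libraries such as Ripser or GUDHI compute higher-dimensional persistence; the MST shortcut above applies only to $H_0$.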
Feature-Based Echo-State Networks: A Step Towards Interpretability and Minimalism in Reservoir Computer
This paper proposes a novel and interpretable recurrent neural-network structure using the echo-state network (ESN) paradigm for time-series prediction. While traditional ESNs perform well for dynamical systems prediction, they need a large dynamic reservoir with increased computational complexity, and they lack the interpretability to discern contributions from different input combinations to the output. Here, a systematic reservoir architecture is developed using smaller parallel reservoirs driven by different input combinations, known as features, which are then nonlinearly combined to produce the output. The resultant feature-based ESN (Feat-ESN) outperforms the traditional single-reservoir ESN with fewer reservoir nodes. The predictive capability of the proposed architecture is demonstrated on three systems: two synthetic datasets from chaotic dynamical systems and a set of real-time traffic data.
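A minimal sketch of the parallel-reservoir idea: the abstract does not give Feat-ESN's exact nonlinear combination rule, so a quadratic state expansion with a ridge readout is assumed here, and all sizes and gains are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, rho=0.9):
    """Small random reservoir with spectral radius scaled to rho."""
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    return W, W_in

def run_reservoir(W, W_in, U):
    """Drive the reservoir with input sequence U (T x n_in)."""
    x = np.zeros(W.shape[0])
    states = []
    for u in U:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy 2-input task: predict u1(t) * u2(t-1) from two driving signals.
T = 500
t = np.arange(T)
U = np.stack([np.sin(0.1 * t), np.cos(0.23 * t)], axis=1)
y = U[:, 0] * np.roll(U[:, 1], 1)

# One small reservoir per input combination ("feature"), run in parallel.
feature_sets = [[0], [1], [0, 1]]
states = [run_reservoir(*make_reservoir(len(f), 30), U[:, f]) for f in feature_sets]

# Nonlinear combination of the parallel reservoir states, then ridge readout.
S = np.hstack(states)
Phi = np.hstack([S, S ** 2])
lam = 1e-6
W_out = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
mse = np.mean((Phi @ W_out - y) ** 2)
```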
Reduced-order modeling of unsteady fluid flow using neural network ensembles
Halder, Rakesh, Ataei, Mohammadmehdi, Salehipour, Hesam, Fidkowski, Krzysztof, Maki, Kevin
The use of deep learning has become increasingly popular in reduced-order models (ROMs) to obtain low-dimensional representations of full-order models. Convolutional autoencoders (CAEs) are often used to this end as they are adept at handling data that are spatially distributed, including solutions to partial differential equations. When applied to unsteady physics problems, ROMs also require a model for time-series prediction of the low-dimensional latent variables. Long short-term memory (LSTM) networks, a type of recurrent neural network useful for modeling sequential data, are frequently employed in data-driven ROMs for autoregressive time-series prediction. When making predictions at unseen design points over long time horizons, error propagation is a frequently encountered issue, where errors made early on can compound over time and lead to large inaccuracies. In this work, we propose using bagging, a commonly used ensemble learning technique, to develop a fully data-driven ROM framework referred to as the CAE-eLSTM ROM that uses CAEs for spatial reconstruction of the full-order model and LSTM ensembles for time-series prediction. When applied to two unsteady fluid dynamics problems, our results show that the presented framework effectively reduces error propagation and leads to more accurate time-series prediction of latent variables at unseen points.
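The bagging mechanics described above can be sketched with simple linear autoregressive models standing in for the LSTM ensemble members; the bootstrap-resampled training sets and the step-wise averaging of ensemble predictions during the autoregressive rollout are the parts the abstract describes, while everything else below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def bagged_ar_ensemble(Z, n_models=10, order=3, lam=1e-6):
    """Fit an ensemble of AR models on bootstrap resamples of (lags, target) pairs."""
    T = len(Z)
    # Row t holds [z_{t-1}, z_{t-2}, ..., z_{t-order}] flattened.
    X = np.hstack([Z[order - 1 - k:T - 1 - k] for k in range(order)])
    Y = Z[order:]
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap resample
        A = np.linalg.solve(X[idx].T @ X[idx] + lam * np.eye(X.shape[1]),
                            X[idx].T @ Y[idx])
        models.append(A)
    return models

def rollout(models, history, steps, order=3):
    """Autoregressive rollout, averaging the ensemble at every step."""
    hist = list(history)
    preds = []
    for _ in range(steps):
        x = np.concatenate(hist[-1:-order - 1:-1])   # [z_{t-1}, ..., z_{t-order}]
        z = np.mean([x @ A for A in models], axis=0)
        preds.append(z)
        hist.append(z)
    return np.array(preds)
```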
Sequential Adaptation of Radial Basis Function Neural Networks and its Application to Time-series Prediction
We develop a sequential adaptation algorithm for radial basis function (RBF) neural networks of Gaussian nodes, based on the method of successive F-Projections. This method makes use of each observation efficiently in that the network mapping function so obtained is consistent with that information and is also optimal in the least $L_2$-norm sense. The RBF network with the F-Projections adaptation algorithm was used for predicting a chaotic time-series. We compare its performance to an adaptation scheme based on the method of stochastic approximation, and show that the F-Projections algorithm converges to the underlying model much faster.
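For context, a sequential per-observation update for a Gaussian RBF network might look like the following. This sketches the stochastic-approximation style of adaptation the paper compares against (a normalized LMS step), not the F-Projections algorithm itself; the class name and hyperparameters are illustrative:

```python
import numpy as np

class SequentialRBF:
    """Gaussian RBF network adapted one observation at a time."""

    def __init__(self, centers, width, lr=0.5):
        self.c = np.asarray(centers, float)   # fixed Gaussian centers
        self.w = np.zeros(len(self.c))        # output weights
        self.width = width
        self.lr = lr

    def _phi(self, x):
        """Gaussian basis activations for a scalar input."""
        return np.exp(-((x - self.c) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, y):
        """One normalized-LMS step on the squared error for this observation."""
        phi = self._phi(x)
        err = y - phi @ self.w
        self.w += self.lr * err * phi / (phi @ phi)
        return err
```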
Top 10 Deep Learning Algorithms You Should Know in 2022
Deep learning has gained massive popularity in scientific computing, and its algorithms are widely used by industries that solve complex problems. All deep learning algorithms use different types of neural networks to perform specific tasks. This article examines the essential artificial neural networks and how deep learning algorithms work. Deep learning is a type of machine learning that uses artificial neural networks, modeled on the structure and function of the human brain, to perform sophisticated computations on large amounts of data.
Composite FORCE learning of chaotic echo state networks for time-series prediction
Li, Yansong, Hu, Kai, Nakajima, Kohei, Pan, Yongping
Echo state network (ESN), a kind of recurrent neural network, consists of a fixed reservoir in which neurons are connected randomly and recursively, and it obtains the desired output by training only the output connection weights. First-order reduced and controlled error (FORCE) learning is an online supervised training approach that can change the chaotic activity of ESNs into specified activity patterns. This paper proposes a composite FORCE learning method based on recursive least squares to train ESNs whose initial activity is spontaneously chaotic, where a composite learning technique featuring dynamic regressor extension and memory data exploitation is applied to enhance parameter convergence. The proposed method is applied to a benchmark problem of predicting chaotic time series generated by the Mackey-Glass system, and numerical results show that it significantly improves learning and prediction performance compared with existing methods.
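Standard FORCE learning with recursive least squares, which the composite method builds on, can be sketched as follows. The dynamic-regressor-extension and memory-data enhancements from the paper are omitted, and the reservoir size, gains, and target signal are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200
W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))  # spectral radius ~1.5: chaotic
w_fb = rng.uniform(-1, 1, size=n)                    # feedback weights
w = np.zeros(n)                                      # trained readout weights
P = np.eye(n)                                        # RLS inverse correlation matrix
dt, tau = 0.1, 1.0
x = rng.normal(scale=0.5, size=n)
r = np.tanh(x)
z = 0.0

target = lambda t: np.sin(0.2 * t)

errs = []
for step in range(3000):
    t = step * dt
    # Leaky reservoir driven by its own readout (the FORCE feedback loop).
    x += dt / tau * (-x + W @ r + w_fb * z)
    r = np.tanh(x)
    z = w @ r
    # Recursive least squares update keeping the output error small online.
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = z - target(t)
    w -= e * k
    errs.append(abs(e))
```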
Top 10 Deep Learning Algorithms Beginners Should Know in 2022
Deep learning algorithms work with almost any kind of data and require large amounts of computing power and information to solve complicated problems. Deep learning has gained massive popularity in scientific computing, and its algorithms are widely used by industries that solve complex problems. All deep learning algorithms use different types of neural networks to perform specific tasks. These algorithms attempt to draw conclusions similar to those a human would by continually analyzing data with a given logical structure. Here are the top 10 deep learning algorithms that you should know as a beginner in 2022.
Top 10 Deep Learning Techniques Data Scientist Should Know in 2021. - Nexart
CNNs are used to build models that tackle high-complexity tasks, preprocessing, and data compilation. A CNN consists of multiple layers and is mainly used for image processing and object detection. These layers focus on processing and extracting different features from the data. CNNs are a widely used technique for identifying satellite images, processing medical images, and detecting anomalies. Convolution: this operation derives feature maps from the input data, after which an activation function is applied to those maps.
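A minimal sketch of that convolution step: a valid cross-correlation sliding one kernel over an image to produce a single feature map, followed by a ReLU activation (the kernel and image here are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation: slide the kernel over the image and
    take dot products, producing one feature map."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])   # responds to dark-to-bright vertical edges
image = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0, 1.0]])
feature_map = relu(conv2d(image, edge_kernel))
# The map lights up only where the edge between the dark and bright halves sits.
```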
Top 10 Deep Learning Algorithms One Should Know in 2021
Deep learning algorithms train machines by using artificial neural networks to perform sophisticated computations on large amounts of data. Deep learning is a type of machine learning that works based on the structure and function of the human brain. While deep learning algorithms feature self-learning representations, they depend upon ANNs that mirror the way the brain computes information. CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection. Yann LeCun developed the first CNN in 1988, when it was called LeNet.
Distributed Learning and its Application for Time-Series Prediction
Nguyen, Nhuong V., Legitime, Sybille
Extreme events are occurrences whose magnitude and impact can cause extensive damage to people, infrastructure, and the environment. Motivated by the extreme nature of the current global health landscape, which is plagued by the coronavirus pandemic, we seek to better understand and model extreme events. Modeling extreme events is common in practice and plays an important role in time-series prediction applications. Our goals are to (i) compare and investigate the effect of some common extreme-events modeling methods to explore which method can be practical in reality and (ii) accelerate the deep learning training process, which commonly uses a deep recurrent neural network (RNN), by implementing an asynchronous local Stochastic Gradient Descent (SGD) framework among multiple compute nodes. In order to verify our distributed extreme-events modeling, we evaluate our proposed framework on the S&P 500 stock data set with a standard recurrent neural network. Our intuition is to explore the (best) extreme-events modeling method that could work well under the distributed deep learning setting. Moreover, by using asynchronous distributed learning, we aim to significantly reduce the communication cost between the compute nodes and the central server, which is the main bottleneck of almost all distributed learning frameworks. We implement our proposed work and evaluate its performance on representative data sets, such as S&P 500 stock over a $5$-year period. The experimental results validate the correctness of the design principle and show a significant training duration reduction of up to $8\times$ compared to the baseline single compute node. Our results also show that our proposed work achieves the same level of test accuracy as the baseline setting.
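The paper's scheme is asynchronous across real compute nodes; a single-process sketch with synchronous rounds of local SGD and parameter averaging illustrates the core idea on a toy linear-regression task (all names, shard sizes, and hyperparameters are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic linear-regression data sharded across workers.
d, n_workers, n_per = 5, 4, 200
w_true = rng.normal(size=d)
shards = []
for _ in range(n_workers):
    X = rng.normal(size=(n_per, d))
    y = X @ w_true + 0.01 * rng.normal(size=n_per)
    shards.append((X, y))

def local_sgd_steps(w, X, y, steps=20, lr=0.05, batch=16):
    """A worker's local SGD pass between synchronizations."""
    w = w.copy()
    for _ in range(steps):
        idx = rng.integers(0, len(X), size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= lr * grad
    return w

# Local SGD with periodic parameter averaging: workers run independently
# between rounds, so only one parameter exchange per round is needed,
# cutting communication relative to per-step synchronization.
w_global = np.zeros(d)
for _ in range(30):
    locals_ = [local_sgd_steps(w_global, X, y) for X, y in shards]
    w_global = np.mean(locals_, axis=0)
```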